Proceedings of the 12th International Workshop on Theorem Proving Components for Educational Software
Narboux, Julien, Neuper, Walther, Quaresma, Pedro
The ThEdu series pursues a smooth transition from the intuitive way of doing mathematics at secondary school to a more formal approach to the subject in STEM education, while favouring software support for this transition by exploiting the power of theorem-proving technologies. What follows is a brief description of how the present volume contributes to this enterprise. The 12th International Workshop on Theorem Proving Components for Educational Software (ThEdu'23) was a satellite event of the 29th International Conference on Automated Deduction (CADE 2023), July 1-4, 2023, Rome, Italy. ThEdu'23 was very successful, with one invited talk by Yves Bertot (Inria, France), "The challenges of using Type Theory to teach Mathematics", and seven regular contributions. An open call for papers was then issued, to which eight contributions were submitted. Seven submissions were accepted by our reviewers, who jointly produced at least three careful reports on each contribution. The resulting revised papers are collected in the present volume. We, the volume editors, hope that this collection of papers will further promote the development of theorem-proving-based software, and that it will help improve the mutual understanding between computer scientists, mathematicians, and stakeholders in education. PC Chairs: Julien Narboux (University of Strasbourg, France); Walther Neuper (JKU, Johannes Kepler University, Linz, Austria); Pedro Quaresma (University of Coimbra, Portugal)
- Education > Educational Technology > Educational Software (0.60)
- Education > Educational Setting > Online (0.60)
- Education > Curriculum > Subject-Specific Education (0.53)
On the Opportunities and Risks of Foundation Models
Bommasani, Rishi, Hudson, Drew A., Adeli, Ehsan, Altman, Russ, Arora, Simran, von Arx, Sydney, Bernstein, Michael S., Bohg, Jeannette, Bosselut, Antoine, Brunskill, Emma, Brynjolfsson, Erik, Buch, Shyamal, Card, Dallas, Castellon, Rodrigo, Chatterji, Niladri, Chen, Annie, Creel, Kathleen, Davis, Jared Quincy, Demszky, Dora, Donahue, Chris, Doumbouya, Moussa, Durmus, Esin, Ermon, Stefano, Etchemendy, John, Ethayarajh, Kawin, Fei-Fei, Li, Finn, Chelsea, Gale, Trevor, Gillespie, Lauren, Goel, Karan, Goodman, Noah, Grossman, Shelby, Guha, Neel, Hashimoto, Tatsunori, Henderson, Peter, Hewitt, John, Ho, Daniel E., Hong, Jenny, Hsu, Kyle, Huang, Jing, Icard, Thomas, Jain, Saahil, Jurafsky, Dan, Kalluri, Pratyusha, Karamcheti, Siddharth, Keeling, Geoff, Khani, Fereshte, Khattab, Omar, Koh, Pang Wei, Krass, Mark, Krishna, Ranjay, Kuditipudi, Rohith, Kumar, Ananya, Ladhak, Faisal, Lee, Mina, Lee, Tony, Leskovec, Jure, Levent, Isabelle, Li, Xiang Lisa, Li, Xuechen, Ma, Tengyu, Malik, Ali, Manning, Christopher D., Mirchandani, Suvir, Mitchell, Eric, Munyikwa, Zanele, Nair, Suraj, Narayan, Avanika, Narayanan, Deepak, Newman, Ben, Nie, Allen, Niebles, Juan Carlos, Nilforoshan, Hamed, Nyarko, Julian, Ogut, Giray, Orr, Laurel, Papadimitriou, Isabel, Park, Joon Sung, Piech, Chris, Portelance, Eva, Potts, Christopher, Raghunathan, Aditi, Reich, Rob, Ren, Hongyu, Rong, Frieda, Roohani, Yusuf, Ruiz, Camilo, Ryan, Jack, Ré, Christopher, Sadigh, Dorsa, Sagawa, Shiori, Santhanam, Keshav, Shih, Andy, Srinivasan, Krishnan, Tamkin, Alex, Taori, Rohan, Thomas, Armin W., Tramèr, Florian, Wang, Rose E., Wang, William, Wu, Bohan, Wu, Jiajun, Wu, Yuhuai, Xie, Sang Michael, Yasunaga, Michihiro, You, Jiaxuan, Zaharia, Matei, Zhang, Michael, Zhang, Tianyi, Zhang, Xikun, Zhang, Yuhui, Zheng, Lucia, Zhou, Kaitlyn, Liang, Percy
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and, due to their emergent properties, what they are even capable of. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
- Europe > Germany (0.45)
- North America > United States > New York > New York County > New York City (0.27)
- Asia > China (0.27)
- (18 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- (2 more...)
- Social Sector (1.00)
- Media > News (1.00)
- Leisure & Entertainment > Games (1.00)
- (36 more...)
Active learning for online training in imbalanced data streams under cold start
Barata, Ricardo, Leite, Miguel, Pacheco, Ricardo, Sampaio, Marco O. P., Ascensão, João Tiago, Bizarro, Pedro
Labeled data is essential in modern systems that rely on Machine Learning (ML) for predictive modelling. Such systems may suffer from the cold-start problem: supervised models work well, but initially there are no labels, which are costly or slow to obtain. This problem is even worse in imbalanced data scenarios. Online financial fraud detection is an example where labeling is either i) expensive or ii) subject to long delays if it relies on victims filing complaints. The latter may not be viable if a model has to be in place immediately, so one option is to ask analysts to label events while minimizing the number of annotations to control costs. We propose an Active Learning (AL) annotation system for datasets with orders of magnitude of class imbalance, in a cold-start streaming scenario. We present a computationally efficient Outlier-based Discriminative AL approach (ODAL) and design a novel 3-stage sequence of AL labeling policies in which it serves as a warm-up. Then, we perform empirical studies on four real-world datasets with various magnitudes of class imbalance. The results show that our method can reach a high-performance model more quickly than standard AL policies. Its observed gains over random sampling can reach 80%, and it can be competitive with policies given an unlimited annotation budget or additional historical data (with 1/10 to 1/50 of the labels).
- North America > United States > California > San Francisco County > San Francisco (0.28)
- North America > United States > New York (0.14)
- Asia > China (0.14)
- (4 more...)
- Education > Educational Setting > Online (1.00)
- Energy > Oil & Gas > Midstream (0.93)
- Materials > Chemicals > Industrial Gases > Liquified Gas (0.67)
- (3 more...)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
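The staged idea in the abstract above, an unsupervised outlier-based warm-up followed by model-driven querying once both classes have been seen, can be sketched as follows. This is a minimal illustration under invented assumptions, not the paper's ODAL implementation: the synthetic pool, the z-score outlier criterion, and the logistic uncertainty sampler are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic imbalanced pool: ~1% positives (e.g., fraud), drawn far from the bulk.
X = np.vstack([rng.normal(0.0, 1.0, size=(2000, 2)),
               rng.normal(4.0, 1.0, size=(20, 2))])
y = np.concatenate([np.zeros(2000), np.ones(20)])
perm = rng.permutation(len(y))
X, y = X[perm], y[perm]

budget = 100          # total annotation budget
labeled = []          # indices sent to the analyst so far

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Stage 1 (cold-start warm-up): query the strongest outliers until both
# classes have been seen at least once; no model is needed yet.
z = (X - X.mean(axis=0)) / X.std(axis=0)
order = np.argsort(-np.linalg.norm(z, axis=1))
i = 0
while len(labeled) < budget and len({y[j] for j in labeled}) < 2:
    labeled.append(int(order[i]))
    i += 1

# Stage 2: switch to uncertainty sampling with a small logistic model.
def fit_logreg(Xl, yl, steps=300, lr=0.1):
    A = np.hstack([Xl, np.ones((len(Xl), 1))])
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        w -= lr * A.T @ (sigmoid(A @ w) - yl) / len(yl)
    return w

A_all = np.hstack([X, np.ones((len(X), 1))])
while len(labeled) < budget:
    w = fit_logreg(X[labeled], y[labeled])
    p = sigmoid(A_all @ w)
    unc = -np.abs(p - 0.5)        # most uncertain = closest to 0.5
    unc[labeled] = -np.inf        # never re-query annotated points
    labeled.append(int(np.argmax(unc)))

found = int(y[labeled].sum())     # positives discovered within budget
```

The warm-up needs no labels at all, which is what makes it usable at cold start; the switch point here (both classes observed) is one simple choice among many.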
The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI
There is mounting public concern over the influence that AI-based systems have in our society. Coalitions in all sectors are acting worldwide to resist harmful applications of AI. From indigenous people addressing the lack of reliable data, to smart city stakeholders, to students protesting academic relationships with sex trafficker and MIT donor Jeffrey Epstein, the questionable ethics and values of those heavily investing in and profiting from AI are under global scrutiny. There are biased, wrongful, and disturbing assumptions embedded in AI algorithms that could become locked in without intervention. Our best human judgment is needed to contain AI's harmful impact. Perhaps one of the greatest contributions of AI will be to make us ultimately understand how important human wisdom truly is in life on earth.
- Research Report > New Finding (1.00)
- Press Release (1.00)
- Overview (1.00)
- (2 more...)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Telecommunications (1.00)
- (60 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Applied AI (1.00)
Litigating Artificial Intelligence: When Does AI Violate Our Legal Rights?
From the minds of Canada's leading law and technology experts comes a playbook for understanding the multi-faceted intersection of AI and the law. TORONTO, May 27, 2021 (GLOBE NEWSWIRE) -- We are living in an Artificial Intelligence (AI) boom. Self-driving cars, personal voice assistants, and facial recognition technology are just a few of the AI-enabled technologies permeating everyday life. But what happens when AI causes harm or violates our rights? If your self-driving car gets into an accident while on autopilot, are you responsible? Emond Publishing, Canada's leading independent legal publisher, today announced the release of Litigating Artificial Intelligence, a book examining AI-informed legal determinations, AI-based lawsuits, and AI-enabled litigation tools. Anchored by the expertise of general editors Jill R. Presser, Jesse Beatson, and Gerald Chan, this title offers practical insights regarding AI's decision-making capabilities, its position in evidence law and product-based lawsuits, its role in automating legal work, and its use by courts, tribunals, and government agencies. For example, can government agencies use AI-powered facial recognition software to identify BLM protestors and Capitol rioters, or does this violate privacy rights? Who is liable: users, developers, or the AI itself? What laws are in place to prevent AI-related crimes, and how do litigators prosecute the responsible parties?
- Asia > Japan (0.28)
- North America > Canada > Ontario > Toronto (0.24)
- North America > United States > New York (0.14)
- (4 more...)
- Financial News (1.00)
- Research Report (0.68)
- Transportation > Ground > Road (1.00)
- Law > Litigation (1.00)
- Government > Regional Government (1.00)
- (6 more...)
AutonoML: Towards an Integrated Framework for Autonomous Machine Learning
Kedziora, David Jacob, Musial, Katarzyna, Gabrys, Bogdan
Over the last decade, the long-running endeavour to automate high-level processes in machine learning (ML) has risen to mainstream prominence, stimulated by advances in optimisation techniques and their impact on selecting ML models/algorithms. Central to this drive is the appeal of engineering a computational system that both discovers and deploys high-performance solutions to arbitrary ML problems with minimal human interaction. Beyond this, an even loftier goal is the pursuit of autonomy, which describes the capability of the system to independently adjust an ML solution over a lifetime of changing contexts. However, these ambitions are unlikely to be achieved in a robust manner without the broader synthesis of various mechanisms and theoretical frameworks, which, at the present time, remain scattered across numerous research threads. Accordingly, this review seeks to motivate a more expansive perspective on what constitutes an automated/autonomous ML system, alongside consideration of how best to consolidate those elements. In doing so, we survey developments in the following research areas: hyperparameter optimisation, multi-component models, neural architecture search, automated feature engineering, meta-learning, multi-level ensembling, dynamic adaptation, multi-objective evaluation, resource constraints, flexible user involvement, and the principles of generalisation. We also develop a conceptual framework throughout the review, augmented by each topic, to illustrate one possible way of fusing high-level mechanisms into an autonomous ML system. Ultimately, we conclude that the notion of architectural integration deserves more discussion, without which the field of automated ML risks stifling both its technical advantages and general uptake.
- Europe > France (0.45)
- Europe > Spain (0.45)
- Oceania > Australia > New South Wales > Sydney (0.14)
- (8 more...)
- Overview (1.00)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study (0.67)
- Instructional Material > Course Syllabus & Notes (0.45)
- Energy > Oil & Gas (0.92)
- Information Technology > Services (0.92)
- Health & Medicine > Therapeutic Area (0.67)
- Education > Educational Setting > Online (0.45)
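As one concrete instance of the hyperparameter-optimisation mechanism surveyed in the abstract above, a minimal random-search loop might look like the sketch below. The ridge-regression task, the log-uniform search range, and the 30-trial budget are illustrative choices, not part of the review.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic regression task with a small held-out validation split.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
X_tr, y_tr = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

def ridge_fit(Xm, ym, lam):
    # Closed-form ridge regression solution.
    d = Xm.shape[1]
    return np.linalg.solve(Xm.T @ Xm + lam * np.eye(d), Xm.T @ ym)

def val_loss(lam):
    w = ridge_fit(X_tr, y_tr, lam)
    return float(np.mean((X_val @ w - y_val) ** 2))

# Random search: sample the regularisation strength log-uniformly,
# evaluate on the validation split, and keep the best configuration.
best_lam, best_loss = None, np.inf
for _ in range(30):
    lam = 10.0 ** rng.uniform(-4, 2)
    loss = val_loss(lam)
    if loss < best_loss:
        best_lam, best_loss = lam, loss
```

An AutoML system wraps exactly this kind of evaluate-and-select loop with smarter samplers (Bayesian optimisation, bandits) and with the architectural integration the review argues for.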
Online Learning Based Risk-Averse Stochastic MPC of Constrained Linear Uncertain Systems
This paper investigates the problem of designing data-driven stochastic Model Predictive Control (MPC) for linear time-invariant systems under additive stochastic disturbance, whose probability distribution is unknown but can be partially inferred from data. We propose a novel online learning based risk-averse stochastic MPC framework in which Conditional Value-at-Risk (CVaR) constraints on system states are required to hold for a family of distributions called an ambiguity set. The ambiguity set is constructed from disturbance data by leveraging a Dirichlet process mixture model that is self-adaptive to the underlying data structure and complexity. Specifically, the structural property of multimodality is exploited, so that the first- and second-order moment information of each mixture component is incorporated into the ambiguity set. A novel constraint tightening strategy is then developed based on an equivalent reformulation of distributionally robust CVaR constraints over the proposed ambiguity set. As more data are gathered during the runtime of the controller, the ambiguity set is updated online using real-time disturbance data, which enables the risk-averse stochastic MPC to cope with time-varying disturbance distributions. The online variational inference algorithm employed does not require that all collected data be learned from scratch, and therefore the proposed MPC is endowed with the guaranteed computational complexity of online learning. Guarantees on recursive feasibility and closed-loop stability of the proposed MPC are established via a safe update scheme. Numerical examples are used to illustrate the effectiveness and advantages of the proposed MPC.
- Research Report (0.90)
- Overview (0.68)
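For reference, CVaR constraints of the kind discussed in the abstract above are commonly written in the Rockafellar-Uryasev form below; the symbols (risk level $\alpha$, loss $\ell$, ambiguity set $\mathcal{P}$, bound $\delta$) are generic notation, not necessarily the paper's.

```latex
% Rockafellar--Uryasev representation of CVaR at level \alpha \in (0,1):
\[
  \operatorname{CVaR}_{\alpha}(L)
  \;=\; \inf_{t \in \mathbb{R}}
  \Bigl\{\, t + \tfrac{1}{\alpha}\,\mathbb{E}\bigl[(L - t)^{+}\bigr] \Bigr\},
\]
% and the distributionally robust state constraint over an ambiguity set \mathcal{P}:
\[
  \sup_{\mathbb{P} \in \mathcal{P}}
  \operatorname{CVaR}_{\alpha}^{\mathbb{P}}\bigl(\ell(x_k)\bigr) \;\le\; \delta .
\]
```

Requiring the supremum over $\mathcal{P}$ is what makes the constraint distributionally robust: it must hold for every distribution consistent with the observed disturbance data.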
Adam with Bandit Sampling for Deep Learning
Liu, Rui, Wu, Tianyi, Mozafari, Barzan
Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs, that allows us to also adapt to different training examples based on their importance in the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to the model training are sampled with higher probabilities. We theoretically show that Adambs improves the convergence rate of Adam---$O(\sqrt{\frac{\log n}{T} })$ instead of $O(\sqrt{\frac{n}{T}})$ in some cases. Experiments on various models and datasets demonstrate Adambs's fast convergence in practice.
- North America > United States > Michigan (0.14)
- North America > Canada (0.14)
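A rough sketch of the idea above, assuming nothing beyond the abstract: keep an EXP3-style distribution over training examples, sample each mini-batch from it, take a standard Adam step, and reward sampled examples in proportion to their gradient magnitude. The toy task, the reward definition, and all hyperparameters here are illustrative, not the Adambs algorithm as published.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable classification task.
n, d = 500, 10
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Adam state for a logistic-regression weight vector.
w = np.zeros(d)
m = np.zeros(d)
v = np.zeros(d)
beta1, beta2, lr, eps = 0.9, 0.999, 0.05, 1e-8

# Bandit weights: one arm per training example.
logits = np.zeros(n)
eta, batch = 0.01, 32

for t in range(1, 201):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    idx = rng.choice(n, size=batch, p=p)          # bandit-sampled mini-batch
    err = sigmoid(X[idx] @ w) - y[idx]            # per-example residuals
    g = (err[:, None] * X[idx]).mean(axis=0)      # mini-batch gradient

    # Standard Adam update with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    w -= lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)

    # Reward each sampled example by its gradient magnitude,
    # importance-weighted so rarely sampled arms are not starved.
    r = np.abs(err) * np.linalg.norm(X[idx], axis=1)
    logits[idx] += eta * r / (n * p[idx])

acc = float(((X @ w > 0) == (y > 0.5)).mean())
```

Over time the sampler concentrates probability on examples that still produce large gradients, which is the mechanism behind the claimed speed-up over uniform mini-batching.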
Top 10 Reinforcement Learning Courses & Certifications in 2020
Reinforcement Learning is one of the most in-demand research topics, and its popularity is only growing day by day. A reinforcement learning agent learns from experience rather than being explicitly taught; this is essentially trial-and-error learning. To help readers understand RL, Analytics Insight compiles the Top 10 Reinforcement Learning Courses and Certifications in 2020. The reinforcement learning specialization consists of four courses that explore the power of adaptive learning systems and artificial intelligence (AI). In this MOOC, you will learn how Reinforcement Learning (RL) solutions solve real-world problems through trial-and-error interaction, by implementing a complete RL solution.
- Education (1.00)
- Energy > Oil & Gas (0.50)
- Leisure & Entertainment > Games > Computer Games (0.31)
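The trial-and-error learning described above is what tabular Q-learning makes concrete. Below is a self-contained toy sketch; the corridor environment and all hyperparameters are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 5-state corridor: action 0 moves left, action 1 moves right.
# Reward 1 only for reaching the rightmost state, which ends the episode.
n_states, n_actions = 5, 2
goal = n_states - 1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
    return s2, float(s2 == goal), s2 == goal

# Optimistic initialisation drives exploration; values decay to the truth.
Q = np.ones((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):
    s = 0
    for _ in range(30):
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Trial-and-error update toward the bootstrapped target.
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)   # greedy policy per state
```

After training, the greedy policy moves right in every non-goal state, exactly the behaviour the agent was never told but discovered by trial and error.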
Online Adaptive Learning for Runtime Resource Management of Heterogeneous SoCs
Mandal, Sumit K., Ogras, Umit Y., Doppa, Janardhan Rao, Ayoub, Raid Z., Kishinevsky, Michael, Pande, Partha P.
Dynamic resource management has become one of the major areas of research in modern computer and communication system design, driven by demands for lower power consumption and higher performance. The number of integrated cores, the level of heterogeneity, and the number of control knobs increase steadily. As a result, system complexity is increasing faster than our ability to optimize and dynamically manage the resources. Moreover, offline approaches are sub-optimal due to workload variations and the large volume of new applications unknown at design time. This paper first reviews recent online learning techniques for predicting system performance, power, and temperature. Then, we describe the use of predictive models for online control using two modern approaches: imitation learning (IL) and explicit nonlinear model predictive control (NMPC). Evaluations on a commercial mobile platform with 16 benchmarks show that the IL approach successfully adapts the control policy to unknown applications. The explicit NMPC provides 25% energy savings compared to a state-of-the-art algorithm for multi-variable power management of modern GPU sub-systems.
- Research Report (0.50)
- Instructional Material > Online (0.41)
- Semiconductors & Electronics (0.94)
- Energy > Oil & Gas > Upstream (0.36)
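The first ingredient reviewed above, online learning of a power/performance predictor, can be illustrated with a minimal sketch. The quadratic power model, the chosen features, and the learning rate below are hypothetical stand-ins, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: power grows with frequency^2 * utilisation.
def true_power(freq, util):
    return 0.5 + 2.0 * freq ** 2 * util

# Online linear predictor on features [1, freq^2 * util],
# updated once per control interval from the latest telemetry sample.
theta = np.zeros(2)
lr = 0.05
errs = []

for _ in range(2000):
    freq = rng.uniform(0.5, 2.0)   # current DVFS setting (normalised)
    util = rng.uniform(0.0, 1.0)   # observed core utilisation
    x = np.array([1.0, freq ** 2 * util])
    pred = theta @ x               # predict power before observing it
    obs = true_power(freq, util) + 0.05 * rng.normal()   # noisy measurement
    theta -= lr * (pred - obs) * x # one SGD step per interval
    errs.append(abs(pred - obs))

early_mae = float(np.mean(errs[:200]))    # error before adaptation
late_mae = float(np.mean(errs[-200:]))    # error after the model has adapted
```

Because the model is refined at runtime, it tracks workloads that were unknown at design time, which is precisely why the paper argues offline-only approaches are sub-optimal.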